Variational Inference for Gaussian Process Models with Linear Complexity
Large-scale Gaussian process inference has long faced practical challenges due to time and space complexity that is superlinear in dataset size. While sparse variational Gaussian process models are capable of learning from large-scale data, standard strategies for sparsifying the model can prevent the approximation of complex functions. In this work, we propose a novel variational Gaussian process model that decouples the representation of mean and covariance functions in reproducing kernel Hilbert space. We show that this new parametrization generalizes previous models. Furthermore, it yields a variational inference problem that can be solved by stochastic gradient ascent with time and space complexity that is only linear in the number of mean function parameters, regardless of the choice of kernels, likelihoods, and inducing points. This strategy makes the adoption of large-scale expressive Gaussian process models possible. We run several experiments on regression tasks and show that this decoupled approach greatly outperforms previous sparse variational Gaussian process inference procedures.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Austria > Vienna (0.14)
- North America > United States > Arizona > Maricopa County > Phoenix (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Asia > Thailand > Bangkok > Bangkok (0.04)
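The decoupled parametrization described in the abstract above can be sketched in toy form: the mean function uses a large basis A, whose weights are updated by stochastic gradient ascent at cost linear in the basis size, while the predictive variance uses a small separate basis B. This is an illustrative reconstruction, not the authors' implementation; the kernel, bases, noise level, learning rate, and variance formula are all assumptions.

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs
    return np.exp(-0.5 * (X1[:, None] - X2[None, :])**2 / ell**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=200)               # training inputs
y = np.sin(X) + 0.1 * rng.normal(size=200)     # noisy targets
sigma2 = 0.1                                   # assumed Gaussian noise variance

# Large mean basis A: the per-step cost below is linear in its size
A = np.linspace(-3, 3, 50)
a = np.zeros(50)                               # mean weights: m(x) = k(x, A) @ a
Kxa = rbf(X, A)

for _ in range(2000):
    idx = rng.choice(200, size=32, replace=False)          # minibatch
    resid = y[idx] - Kxa[idx] @ a
    grad = Kxa[idx].T @ resid / (32 * sigma2)              # d/da of Gaussian log-lik
    a += 0.005 * grad                                      # stochastic gradient ascent

mse_before = np.mean(y**2)                     # error of the zero-initialized mean
mse_after = np.mean((y - Kxa @ a)**2)

# Small, separate covariance basis B: a Nystrom-style predictive variance at x = 0.5
B = np.linspace(-3, 3, 10)
Kbb = rbf(B, B) + 1e-6 * np.eye(10)
kxb = rbf(np.array([0.5]), B)
var_pred = rbf(np.array([0.5]), np.array([0.5])) - kxb @ np.linalg.solve(Kbb, kxb.T)
```

The point of the decoupling is visible in the shapes: the mean update touches only the 50-element basis A, while the cubic-cost solve is confined to the 10-point basis B.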
Bayesian Nonlinear PDE Inference via Gaussian Process Collocation with Application to the Richards Equation
Yang, Yumo, Bouazza, Anass Ben, Dong, Xuejun, Zhou, Quan
The estimation of unknown parameters in nonlinear partial differential equations (PDEs) offers valuable insights across a wide range of scientific domains. In this work, we focus on estimating plant root parameters in the Richards equation, which is essential for understanding the soil-plant system in agricultural studies. Since conventional methods are computationally intensive and often yield unstable estimates, we develop a new Gaussian process collocation method for efficient Bayesian inference. Unlike existing Gaussian process-based approaches, our method constructs an approximate posterior distribution using samples drawn from a Gaussian process model fitted to the observed data, which does not require any structural assumption about the underlying PDE. Further, we propose to use an importance sampling procedure to correct for the discrepancy between the approximate and true posterior distributions. As an alternative, we also devise a prior-guided Bayesian optimization algorithm leveraging the approximate posterior. Simulation studies demonstrate that our method yields robust estimates under various settings. Finally, we apply our method to a real agricultural data set and estimate the plant root parameters with uncertainty quantification.
- North America > United States > Texas > Brazos County > College Station (0.04)
- North America > United States > Iowa (0.04)
- North America > Canada > Saskatchewan (0.04)
- (2 more...)
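The importance-sampling correction mentioned in the abstract above can be illustrated on a one-dimensional toy problem: samples from an approximate Gaussian posterior are reweighted by the ratio of the unnormalized true posterior density to the approximate density. The densities, parameter values, and sample size below are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(theta):
    # Unnormalized log posterior: a toy stand-in for the PDE-constrained
    # posterior, here a Gaussian with mean 2.0 and standard deviation 0.5
    return -0.5 * (theta - 2.0)**2 / 0.25

# Approximate posterior q (stand-in for the GP-collocation approximation)
mu_q, sd_q = 1.5, 1.0
samples = rng.normal(mu_q, sd_q, size=20000)
log_q = -0.5 * (samples - mu_q)**2 / sd_q**2 - np.log(sd_q)  # up to constants

# Self-normalized importance weights (normalizing constants cancel)
log_w = log_target(samples) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()

post_mean = np.sum(w * samples)    # corrected posterior mean (true value: 2.0)
ess = 1.0 / np.sum(w**2)           # effective sample size diagnostic
```

A low effective sample size would signal that the approximate posterior is too far from the true one for the correction to be reliable.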
Export Reviews, Discussions, Author Feedback and Meta-Reviews
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. The authors present a flexible variational inference method geared toward Gaussian process models with various likelihoods. Specifically, they derive an inference method for models in which some fixed number of latent functions (with GP priors that depend on the input covariates) parameterize a likelihood for conditionally independent observations. They use variational inference to obtain the posterior over the latent functions, where the variational family is taken to be a mixture of Gaussians with a fixed number of components and a chosen covariance structure (full, diagonal, block diagonal, etc.). The paper derives the standard evidence lower bound (ELBO), which decomposes into a negative KL term and an expected log-likelihood term, and the authors note some convenient properties of this decomposition (regarding the optimization of covariance function parameters). This paper is well written, very clear, and technically sound.
- Summary/Review (0.70)
- Research Report > New Finding (0.47)
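The ELBO decomposition the review refers to (expected log-likelihood minus a KL term) can be computed directly for a toy conjugate model. The model, data, and variational parameters below are invented for illustration; the KL term is closed-form for two univariate Gaussians, and the expected log-likelihood is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: latent f ~ N(0, 1), observations y_i ~ N(f, sigma^2)
y = np.array([0.8, 1.1, 0.9])
sigma = 0.5

# Variational posterior q(f) = N(m, s^2)
m, s = 0.9, 0.3

# KL(q || prior) between two univariate Gaussians, in closed form
kl = np.log(1.0 / s) + (s**2 + m**2 - 1.0) / 2.0

# Expected log-likelihood under q, by Monte Carlo over samples of f
f = rng.normal(m, s, size=100000)
loglik = (-0.5 * np.log(2 * np.pi * sigma**2)
          - 0.5 * (y[None, :] - f[:, None])**2 / sigma**2)
expected_ll = np.mean(np.sum(loglik, axis=1))

elbo = expected_ll - kl    # lower-bounds the log marginal likelihood
```

The convenient property the review alludes to is visible here: the KL term involves only the variational and prior parameters, while the likelihood enters only through the expectation term.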
A Gaussian Process Model of Quasar Spectral Energy Distributions
Miller, Andrew
We propose a method for combining two sources of astronomical data, spectroscopy and photometry, that carry information about sources of light (e.g., stars, galaxies, and quasars) at extremely different spectral resolutions. Our model treats the spectral energy distribution (SED) of the radiation from a source as a latent variable that jointly explains both photometric and spectroscopic observations. We place a flexible, nonparametric prior over the SED of a light source that admits a physically interpretable decomposition and allows us to tractably perform inference. We use our model to predict the distribution of the redshift of a quasar from five-band (low spectral resolution) photometric data, the so-called "photo-z" problem. Our method shows that tools from machine learning and Bayesian statistics allow us to leverage multiple resolutions of information to make accurate predictions with well-characterized uncertainties.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > Maryland (0.04)
- North America > United States > California > Alameda County > Berkeley (0.04)
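The link between a latent SED and photometric observations described in the abstract above can be sketched as a filter integral over the redshifted spectrum. The single Gaussian emission feature, boxcar filters, and band centers below are invented stand-ins for the paper's nonparametric SED prior and real filter transmission curves.

```python
import numpy as np

# Observed-frame wavelength grid (angstroms)
lam = np.linspace(3000.0, 9000.0, 600)
dlam = lam[1] - lam[0]

def sed_rest(l):
    # Toy rest-frame SED: one Gaussian emission feature centered at 4000 A,
    # standing in for the latent, GP-modeled spectral energy distribution
    return np.exp(-0.5 * ((l - 4000.0) / 300.0)**2)

def band_flux(z, center, width):
    # Photometric flux in a boxcar filter: the SED, redshifted by a factor
    # (1 + z), integrated against the filter over observed wavelength
    t = ((lam > center - width / 2) & (lam < center + width / 2)).astype(float)
    return np.sum(t * sed_rest(lam / (1.0 + z))) * dlam

# At z = 0.5 the 4000 A feature is observed near 6000 A, so it contributes
# far more flux to the redder band; this shift is what makes photo-z possible
flux_blue = band_flux(0.5, center=4700.0, width=1400.0)
flux_red = band_flux(0.5, center=6200.0, width=1400.0)
```

Inverting this map, from a handful of band fluxes back to a posterior over z and the latent SED, is the inference problem the paper addresses.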